Homework 2

Name:

Collaborators:

due Saturday 10/17/2015 before 23:55

Question 1: Fixed Point Method

The fixed point method is an iterative algorithm usually used to solve algebraic equations. In addition, it is a powerful theoretical tool to find solutions to ODEs, and to show their existence and uniqueness (a result normally called the Picard–Lindelöf theorem, https://en.wikipedia.org/wiki/Picard–Lindelöf_theorem).

1.a You will implement the fixed point method for an input function $f$, an initial guess $p0$, a tolerance $\epsilon$, and a maximum number of iterations $Nmax$. Your function will have an optional parameter named $history$, which by default will be false.

By default, the function will output the final guess, $p$, within the tolerance $\epsilon$. In this case we follow Alg. 2.2 in your textbook. The tolerance is achieved when $|p_{n+1} - p_n| < \epsilon$.

If history is true, then the output will be a vector with all the intermediate values of the fixed point iteration. The length of the vector will be the number of iterations needed to converge (up to a tolerance $\epsilon$). (You can use the push! function in Julia; to initialize an empty vector you can use a = Float64[].)

If the tolerance is not achieved within the $Nmax$ iterations, you will raise an error with a descriptive message, using the error function.


In [ ]:
function fixedPointMethod(f,p0, ϵ, Nmax; history=false)
    # define your variables
    for i = 1:Nmax
        #fill the body of the iteration
    end    
end
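
For reference, one possible reading of this specification is sketched below. The name fixedPointSketch, the decision to store $p0$ as the first entry of the history, and the error message are assumptions of this sketch, not requirements of the assignment:

function fixedPointSketch(f, p0, ϵ, Nmax; history=false)
    p  = p0
    ps = Float64[p0]              # iterates, returned if history is true
    for i = 1:Nmax
        pNew = f(p)               # fixed point update: p_{n+1} = f(p_n)
        push!(ps, pNew)
        if abs(pNew - p) < ϵ      # stopping criterion: |p_{n+1} - p_n| < ϵ
            return history ? ps : pNew
        end
        p = pNew
    end
    error("no convergence after $Nmax iterations")
end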

1.b Write a small function that computes $\sqrt{2}$ using your fixedPointMethod with $f(x) = x/2 + 1/x $, up to a tolerance of $10^{-6}$. You can pick any initial guess $p0 \in [1,2]$.

The main idea is to define the function $f$ within computeSqrt2FixedPoint() and use fixedPointMethod to find $\sqrt{2}$.


In [ ]:
function computeSqrt2FixedPoint()
    # write the body of the function
    return answer 
end
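
As an illustration only, assuming the fixedPointSketch above, the body can be a couple of lines (the initial guess 1.5 and the cap of 100 iterations are arbitrary choices within the allowed ranges):

function computeSqrt2Sketch()
    f(x) = x/2 + 1/x                            # sqrt(2) is a fixed point of f
    return fixedPointSketch(f, 1.5, 1e-6, 100)  # p0 = 1.5 ∈ [1,2]
end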

1.c Using the Fixed Point method and the history parameter, plot the error of the successive approximations of $\sqrt{2}$ (using Gadfly and a semilog scale). What is the convergence rate in this case?

Hint: In order to get a semi-log scale you need to use plot( x = , y = , Scale.y_log10); remember that x and y need to have the same length. Theorem 2.4 and Corollary 2.5 in your textbook may help you to explain the behavior of the error plot.


In [ ]:
## write your script in here
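
A minimal sketch of such a script, assuming the fixedPointSketch above and that Gadfly is installed:

using Gadfly
ps   = fixedPointSketch(x -> x/2 + 1/x, 1.5, 1e-6, 100, history=true)
errs = abs(ps - sqrt(2))                   # error of each iterate
plot(x = 1:length(errs), y = errs, Scale.y_log10)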

Answer:

1.d The iteration method does not converge for every $f$; find a counter-example, i.e. a function and an initial guess for which the iteration diverges. Write a small script that shows that your method does not converge and explain why.

Hint: take a look at your book; there are theorems that guarantee the convergence of the method. Try to find a function that violates the hypotheses of those theorems.


In [ ]:
# write your script here
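
For instance, one classical counter-example (not the only valid choice) is $g(x) = 2x$: its unique fixed point is $0$, but $|g'(x)| = 2 > 1$ everywhere, so the contraction hypothesis of the convergence theorem fails. A short script in this spirit (the iteration count of 10 is arbitrary):

g(x) = 2x          # unique fixed point x = 0, but |g'(x)| = 2 > 1 everywhere
p = 1.0
for i = 1:10
    p = g(p)       # the distance to the fixed point doubles at every step
    println("iteration $i: p = $p")
end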

Answer:

Question 2: Newton's Method

Another popular method for finding zeros of functions is Newton's method. In particular, Newton's method (and its variants) is widely used within the optimization community to find extremal points.

2.a You will implement the Newton method with an input function $f$ with derivative $dfdx$, an initial guess $p0$, and a tolerance $\epsilon$, as defined in Alg. 2.3, together with a maximum number of iterations $Nmax$. The parameter $history$ will have the same behavior as in the fixed point method; if the method does not converge to the desired tolerance in $Nmax$ iterations, you will raise an error.


In [ ]:
function newtonMethod(f,dfdx, p0, ϵ, Nmax; history=false)
    # define your variables
    for i = 1:Nmax
        #fill the body of the iteration
    end
    return p
end
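
A sketch analogous to the fixed point one, with the Newton update $p_{n+1} = p_n - f(p_n)/f'(p_n)$ in the body (again, the name and the history bookkeeping are this sketch's assumptions):

function newtonSketch(f, dfdx, p0, ϵ, Nmax; history=false)
    p  = p0
    ps = Float64[p0]                  # iterates, returned if history is true
    for i = 1:Nmax
        pNew = p - f(p)/dfdx(p)       # Newton update: p_{n+1} = p_n - f(p_n)/f'(p_n)
        push!(ps, pNew)
        if abs(pNew - p) < ϵ          # stopping criterion as in the fixed point case
            return history ? ps : pNew
        end
        p = pNew
    end
    error("no convergence after $Nmax iterations")
end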

2.b In a manner analogous to the fixedPointMethod, write a small function that computes $\sqrt{2}$. Your function will use the newtonMethod (above) to compute the zero of $f(x) = -x/2 + 1/x$, up to a tolerance of $10^{-6}$. Pick any initial guess $p0 \in [1,2].$


In [ ]:
function computeSqrt2Newton()
    # write the body of the function
    return answer 
end
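
For illustration, assuming the newtonSketch above (the derivative is computed by hand; the initial guess 1.5 is an arbitrary point of $[1,2]$):

function computeSqrt2NewtonSketch()
    f(x)    = -x/2 + 1/x          # sqrt(2) is a zero of f
    dfdx(x) = -1/2 - 1/x^2        # its derivative
    return newtonSketch(f, dfdx, 1.5, 1e-6, 100)
end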

2.c Using the Newton method and the history parameter, plot the error of the successive approximations of $\sqrt{2}$ (use Gadfly and a semilog scale). What is the convergence rate in this case, and how does it compare to the fixed point method?


In [ ]:
# write your script here
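
This mirrors the script of question 1.c; a sketch assuming the newtonSketch above and Gadfly:

using Gadfly
ps   = newtonSketch(x -> -x/2 + 1/x, x -> -1/2 - 1/x^2, 1.5, 1e-6, 100, history=true)
errs = abs(ps - sqrt(2))                   # error of each iterate
plot(x = 1:length(errs), y = errs, Scale.y_log10)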

Answer:

2.d As you have seen in class, the Newton method is sensitive to the initial guess; find a function $f$ and two initial guesses such that Newton's method converges to two different limits. Write a script that shows this behavior and explain (concisely) why.


In [ ]:
# write your script here
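
As one possible illustration (not the only valid choice of $f$): with $f(x) = x^2 - 2$ the two roots are $\pm\sqrt{2}$, and the Newton iteration stays on the side of $x = 0$ (where $f'$ vanishes) on which it starts. A sketch, assuming the newtonSketch above:

f(x)    = x^2 - 2
dfdx(x) = 2x
println(newtonSketch(f, dfdx,  1.0, 1e-6, 100))   # converges to  sqrt(2)
println(newtonSketch(f, dfdx, -1.0, 1e-6, 100))   # converges to -sqrt(2)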

Answer:

Question 3: Companion Matrix method

You have seen how to use iterative methods to compute roots of algebraic equations. However, when you are dealing with polynomials you have faster methods. One of the best methods is the Companion Matrix method. The method is based on the Cayley-Hamilton theorem (https://en.wikipedia.org/wiki/Cayley–Hamilton_theorem).

The method transforms a root-finding problem into an eigenvalue problem.

The main idea can be summarized as follows: for a given polynomial $p(x)$ you find a matrix $M$ (usually called the companion matrix: https://en.wikipedia.org/wiki/Companion_matrix) whose characteristic polynomial is $p(x)$, and you compute the eigenvalues of $M$. Since the eigenvalues of a matrix are the roots of its characteristic polynomial, the eigenvalues of the companion matrix $M$ are exactly the roots of $p(x)$.

Let $p(x) = \sum_{i=0}^n \alpha_i x^i$ be a polynomial such that $\alpha_n \neq 0$; then its companion matrix, an $n\times n$ matrix, is given by

\begin{equation} M = \left [ \begin{array}{ccccc} 0 & 0 & \cdots & 0 & -\alpha_0/\alpha_n \\ 1 & 0 & \cdots & 0 & -\alpha_1/\alpha_n \\ 0 & 1 & \ddots & \vdots & \vdots \\ \vdots & \ddots & \ddots & 0 & -\alpha_{n-2}/\alpha_n \\ 0 & \cdots & 0 & 1 & -\alpha_{n-1}/\alpha_n \end{array} \right ]. \end{equation}

You can show that the characteristic polynomial of $M$ is exactly $p$.

Example

We want to compute $\sqrt{2}$, which is one of the roots of $p(x) = x^2 - 2$. The companion matrix is given by \begin{equation} M = \left [ \begin{array}{cc} 0 & 2 \\ 1 & 0 \end{array} \right ] \end{equation} Then we compute the eigenvalues of $M$, and we obtain the two roots, positive and negative.

You can easily do this using:


In [ ]:
M = [0 2;  # creating the companion matrix
     1 0]
(roots, vec) = eig(M)  # computing the eigenvalues
println(maximum(roots) - sqrt(2))  # the positive eigenvalue matches sqrt(2)

As you can see, this is essentially the method that MATLAB's roots function (and similar routines in Julia) uses to compute all the roots of a polynomial.

3.a You will write a function that evaluates $p(x_j) = \sum_{i=0}^n \alpha_i x^i_j$. The inputs of the function are a vector $\mathbf{x}$, such that $\mathbf{x}_j = x_j$, and a vector $\mathbf{alpha}$ that contains the coefficients $\alpha_i$, with $\alpha_n \neq 0$. Your function should support element-wise evaluation.

Hint: you can use something like: p += alpha[i]*x.^(i-1).


In [ ]:
function evalPolynomial(x, alpha)
    # complete the function
    return p 
end

As an example, if $\mathbf{alpha} = [1\, 3\, 4\, 1]$, then the corresponding polynomial is $p(x) = 1 + 3x + 4x^2 + x^3$. The output of your function for $\mathbf{x} = [0 \, 1\, 3]$ and $\mathbf{alpha} = [1\, 3\, 4\, 1]$ should be a vector given by $[p(0)\, p(1) \, p(3)] = [1 \, 9 \, 73 ]$.
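
A sketch following the hint above (the accumulator initialization via zeros(x), which keeps the shape and element type of x, is this sketch's choice):

function evalPolynomialSketch(x, alpha)
    p = zeros(x)                    # accumulator with the same shape as x
    for i = 1:length(alpha)
        p += alpha[i]*x.^(i-1)      # add the term alpha_{i-1} * x^{i-1}
    end
    return p
end

With the example above, evalPolynomialSketch([0, 1, 3], [1, 3, 4, 1]) returns [1, 9, 73].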

3.b You will write a function that computes the companion matrix for a polynomial of degree $n$ given by $p(x) = \sum_{i=0}^n \alpha_i x^i$, whose coefficients are stored in a vector $\alpha$. We suppose that $\alpha_n \neq 0$.

Hint: you may find the ones(n,1) function useful; it creates a column vector full of ones. The diagm function creates a diagonal matrix (remember that the first subdiagonal is indexed by $-1$). You may want to use the slicing operator $M[:,end]$ to modify the last column of your matrix.


In [ ]:
function companionMatrix(alpha)
    n = size(alpha[:],1)-1  # the degree of the polynomial
    # build M with the subdiagonal
    # replace the last column of the matrix by the normalized coefficients of the polynomial
    return M
end
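
A sketch along the lines of the hint (for $p(x) = x^2 - 2$, i.e. alpha = [-2, 0, 1], it reproduces the matrix M of the example above):

function companionMatrixSketch(alpha)
    a = alpha[:]                    # flatten to a column vector
    n = length(a) - 1               # degree of the polynomial
    M = diagm(ones(n-1), -1)        # n×n matrix with ones on the first subdiagonal
    M[:, end] = -a[1:n]/a[n+1]      # last column: -alpha_i/alpha_n, i = 0,...,n-1
    return M
end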

3.c Using your function companionMatrix, you will implement a root-finding algorithm that provides all the roots of a polynomial whose coefficients are stored in a vector alpha.


In [ ]:
function rootFindingCompanionMatrix(alpha)
    # write the body of the function in here (see the snippet above)
    return roots
end
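
Given the pieces above, the function reduces to an eigenvalue computation; a sketch assuming the companionMatrixSketch above:

function rootFindingSketch(alpha)
    M = companionMatrixSketch(alpha)  # build the companion matrix
    (lambda, V) = eig(M)              # its eigenvalues are the roots of p
    return lambda
end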

Use your evalPolynomial function to test that your root-finding algorithm is correct:


In [ ]:
beta = [1 2 3 4 5 6];
evalPolynomial(rootFindingCompanionMatrix(beta), beta)

You should try different betas in the script above. The entries of the output should be close to zero, since the polynomial is being evaluated at (numerical approximations of) its own roots.